Program counter

The program counter (PC), commonly called the instruction pointer (IP) in Intel x86 microprocessors, and sometimes called the instruction address register,[1] or just part of the instruction sequencer[2] in some computers, is a processor register that indicates where the computer is in its instruction sequence. Depending on the details of the particular computer, the PC or IP holds either the memory address of the instruction being executed, or the address of the next instruction to be executed.

In most processors, the program counter is incremented automatically after fetching a program instruction, so that instructions are normally retrieved sequentially from memory. Certain instructions, such as branches, jumps, and subroutine calls and returns, interrupt the sequence by placing a new value in the program counter.

Such jump instructions allow a new address to be chosen as the start of the next part of the flow of instructions from memory, by loading (writing) a new value into the program counter register. A subroutine call is achieved simply by reading the old contents of the program counter, before they are overwritten by the new value, and saving them somewhere in memory or in another register. A subroutine return is then achieved by writing the saved value back into the program counter.
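The call-and-return mechanism described above can be sketched in Python as a toy machine (the instruction set here is entirely hypothetical, chosen only to illustrate how saving and restoring the program counter works):

```python
# Toy machine: memory holds (opcode, operand) pairs; pc indexes into it.
# CALL saves the old pc (the return address) before overwriting it,
# and RET writes the saved value back into the pc.

def run(memory):
    pc = 0              # program counter: address of the next instruction
    return_stack = []   # saved return addresses
    trace = []
    while True:
        op, arg = memory[pc]        # fetch the instruction at address pc
        pc += 1                     # pc is incremented after the fetch
        if op == "JMP":             # jump: load a new value into the pc
            pc = arg
        elif op == "CALL":          # save the old pc, then load the subroutine address
            return_stack.append(pc)
            pc = arg
        elif op == "RET":           # restore the saved pc
            pc = return_stack.pop()
        elif op == "PRINT":
            trace.append(arg)
        elif op == "HALT":
            return trace

program = [
    ("CALL", 3),        # address 0: call the subroutine at address 3
    ("PRINT", "back"),  # address 1: runs after the subroutine returns
    ("HALT", None),     # address 2
    ("PRINT", "sub"),   # address 3: the subroutine body
    ("RET", None),      # address 4: return to the saved address (1)
]
print(run(program))     # ['sub', 'back']
```

Note that the return address saved by CALL is the already-incremented pc, so execution resumes at the instruction after the call, exactly as described above.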

Operation of a simple program counter

The central processing unit (CPU) of a simple computer contains the hardware (control unit and arithmetic logic unit) that executes instructions as they are fetched from memory. Most instruction cycles[3] consist of the CPU sending an address on the address bus to memory, which responds by sending the contents of that memory location as data on the data bus. (This reflects the idea of the stored-program computer, in which executable instructions are stored alongside ordinary data in memory and handled identically by it.[4]) The program counter (PC) is just one of the many registers in the hardware of the CPU. Like each of the other registers, it consists of a bank of binary latches (a binary latch is also known as a flip-flop), with one flip-flop per bit of the integer to be stored[5] (32 for a 32-bit CPU, for example). In the case of the PC, that integer is the address in memory that is to be fetched next.

Once the data (the instruction) has been received on the data bus, the PC is incremented. In some CPUs this is achieved by adding 000...001 to its contents and latching the result back into the register as its new contents; in most CPUs, though, the PC is implemented as a register that is internally wired to count up to the next value when a specific signal is applied externally.[6] In electronics, such a register is known as a binary counter, and hence the origin of the term program counter.

The all-pervading nature of the program counter

The presence of the program counter in the CPU has far-reaching consequences for the way we think when we program computers. Indeed, the program counter (or any equivalent block of hardware that serves the same purpose[7]) is central to the von Neumann architecture.

The PC imposes a strict sequential ordering on the fetching of instructions from memory (the flow of control), even where no sequentiality is implied by the algorithm itself (the von Neumann bottleneck). This is why research into possible models for parallel computing considered,[8] at one point, non-von Neumann or dataflow models that did not use a program counter. For example, functional programming languages offered much hope at the high level, with combinatory logic at the assembler level. Even then, most researchers emulated these models in the microcode of conventional computers (hence still involving a program counter in the hardware); but combinators are so simple that, in principle, they could be implemented directly in hardware without recourse to microcode or program counters at all.

In the end, though, the results of that research fed back, instead, into ways of improving the execution speed of conventional processors. Ways were found for organising out-of-order execution so as to extract the sequencing information that is implicit in the data. Also, the pipeline and very long instruction word (VLIW) organisations allowed the compiler to arrange for multiple calculations to be set off in parallel. At the start of each instruction execution, though, the instruction needs to be fetched from memory, and this is initiated by an instruction fetch cycle that gets the instructions, one at a time, as directed by the program counter.

Even high-level programming languages have the program-counter concept ingrained deep in their behavior. One need only watch how a programmer develops or debugs a computer program to see evidence of this, with the programmer using a finger to point to successive lines of the program to model the steps of its execution. Indeed, a high-level programming language is no less than the assembler language of a high-level virtual machine[9]: a computer that would be too complex to be cost-effective to build directly in hardware, and so is implemented instead using multiple shells of emulation (with the compiler or interpreter providing the higher levels, and the microcode providing the lower levels).

References

  1. ^ Carver Mead and Lynn Conway (1980), Introduction to VLSI Systems, Addison-Wesley, Reading, USA, ISBN 0-201-04358-0
  2. ^ Harry Katzan (1971), Computer Organization and the System/370, Van Nostrand Reinhold Company, New York, USA, LCCCN 72-153191
  3. ^ John L. Hennessy and David A. Patterson (1990), Computer Architecture: a quantitative approach, Morgan Kaufmann Publishers, Palo Alto, USA, ISBN 1-55860-069-8
  4. ^ B. Randell (1982), The Origins of Digital Computers, Springer-Verlag, Berlin, Germany
  5. ^ C. Gordon Bell and Allen Newell (1971), Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York, USA
  6. ^ B. S. Walker (1967), Introduction to Computer Engineering, University of London Press, London, UK, SBN 340 06831 0
  7. ^ Example of an alternative, somewhat blunt, but otherwise equivalent, arrangement (The Story of Mel)
  8. ^ F.B. Chambers, D.A. Duce and G.P. Jones (1984), Distributed Computing, Academic Press, Orlando, USA, ISBN 0-12-167350-2
  9. ^ Douglas Hofstadter (1980), Gödel, Escher, Bach: an eternal golden braid, Penguin Books, Harmondsworth, UK, ISBN 0-14-005579-7
